
Collaborating Authors

Language GANs



To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs

Neural Information Processing Systems

Due to the discrete nature of words, language GANs must be optimized from rewards provided by discriminator networks via reinforcement learning methods. This is a much harder setting than for continuous tasks, which enjoy gradient flows from discriminators to generators, and it usually leads to dramatic learning instabilities. However, we claim that this can be solved by making the discriminator and generator networks cooperate to produce output sequences during training. These cooperative outputs, inherently built to obtain higher discrimination scores, not only provide denser rewards for training but also form a more compact artificial set for discriminator training, hence improving its accuracy and stability. In this paper, we show that our SelfGAN framework, built on this cooperative principle, outperforms Teacher Forcing and obtains state-of-the-art results on two challenging tasks, Summarization and Question Generation.
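
As a rough illustration of the cooperative principle described in this abstract, the sketch below runs a beam search whose ranking score mixes the generator's log-probability with a discriminator score, so decoding is biased toward sequences the discriminator rates as realistic. This is only a minimal sketch, not the authors' SelfGAN implementation: the toy vocabulary, the placeholder generator_logprobs and discriminator_score functions, and the mixing weight alpha are all assumptions.

```python
# Illustrative sketch of "cooperative" decoding: a beam search whose score mixes
# the generator's log-probability with a discriminator score, so the sequences
# produced during training are the ones the discriminator already finds realistic.
# The toy generator/discriminator below and the mixing weight `alpha` are
# placeholders, not the paper's actual models.
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def generator_logprobs(prefix):
    """Toy stand-in for p_G(token | prefix): random but normalized log-probs."""
    random.seed(hash(tuple(prefix)) % (2 ** 32))
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return {tok: math.log(w / total) for tok, w in zip(VOCAB, weights)}

def discriminator_score(sequence):
    """Toy stand-in for log D(sequence); here it mildly prefers shorter outputs."""
    return -0.1 * len(sequence)

def cooperative_beam_search(beam_size=3, max_len=6, alpha=0.5):
    beams = [([], 0.0)]  # (sequence, cumulative generator log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq and seq[-1] == "<eos>":
                candidates.append((seq, logp))
                continue
            for tok, lp in generator_logprobs(seq).items():
                candidates.append((seq + [tok], logp + lp))
        # Rank candidates by a mixture of generator likelihood and discriminator score.
        candidates.sort(
            key=lambda c: (1 - alpha) * c[1] + alpha * discriminator_score(c[0]),
            reverse=True,
        )
        beams = candidates[:beam_size]
    return beams[0][0]

print(cooperative_beam_search())
```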


ColdGANs: Taming Language GANs with Cautious Sampling Strategies

Neural Information Processing Systems

Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often leading to poorly generated text sequences that lack coherence and factualness and are prone to repetitions. At the root of these limitations is the mismatch between training and inference, i.e. the so-called exposure bias. Another problem lies in considering only the reference text as correct, while in practice several alternative formulations could be equally good. Generative Adversarial Networks (GANs) could mitigate those limitations. Nonetheless, the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to under-perform MLE.
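
To make the training/inference mismatch (exposure bias) mentioned in this abstract concrete, the minimal sketch below contrasts a teacher-forced pass, where the decoder always conditions on gold tokens, with a free-running pass, where it conditions on its own samples and errors can compound. The tiny GRU decoder, vocabulary size, and toy reference sequence are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of the exposure-bias mismatch: under MLE training the decoder
# is conditioned on gold prefixes (teacher forcing), while at inference it must
# condition on its own previous samples.
import torch
import torch.nn as nn

vocab_size, hidden = 10, 16
embed = nn.Embedding(vocab_size, hidden)
rnn = nn.GRUCell(hidden, hidden)
out = nn.Linear(hidden, vocab_size)

gold = torch.tensor([1, 4, 2, 7, 3])       # a toy reference sequence

# --- Training-time pass (teacher forcing): inputs are always gold tokens ---
h = torch.zeros(1, hidden)
prev = torch.tensor([0])                   # <bos>
mle_loss = torch.zeros(())
for target in gold:
    h = rnn(embed(prev), h)                # (1, hidden)
    logits = out(h)                        # (1, vocab_size)
    mle_loss = mle_loss + nn.functional.cross_entropy(logits, target.unsqueeze(0))
    prev = target.unsqueeze(0)             # condition on the gold token

# --- Inference-time pass (free running): inputs are the model's own samples ---
h = torch.zeros(1, hidden)
prev = torch.tensor([0])
generated = []
for _ in range(len(gold)):
    h = rnn(embed(prev), h)
    probs = torch.softmax(out(h), dim=-1)
    prev = torch.multinomial(probs, 1).squeeze(1)   # condition on the model's own sample
    generated.append(int(prev))

print("teacher-forcing loss:", float(mle_loss), "free-running sample:", generated)
```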


ColdGANs: Taming Language GANs with Cautious Sampling Strategies Supplementary Material

Neural Information Processing Systems

We used a single RTX 2080 Ti GPU. While T5-small underperforms its larger version, T5-11B, the latter has 11 billion parameters.
CONTEXT: Gabon and South Africa are ranked 119th and 121st, respectively. ANSWER: 119th. HUMAN: What is Gabon's ranking? ColdGAN: What is Gabon's rank on the HDI?



Review for NeurIPS paper: ColdGANs: Taming Language GANs with Cautious Sampling Strategies

Neural Information Processing Systems

1. While it is impressive that this work gets slightly better results than MLE, there are more hyper-parameters to tune, including the mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, and MLE pretraining (according to the appendix). I find it disappointing that so many tricks are needed. If you get rid of pretraining/initialization from T5/BART, would this method work?
2. This work requires MLE pretraining, while prior work "Training Language GANs from Scratch" does not.
For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm whether this is the case, in a way similar to Figure 1. Note that this is different from Figure 4, since during training the discriminator is co-adapting with the generator and might get stuck at a local optimum.
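
For readers unfamiliar with the knobs the review lists, the sketch below shows in simplified form what a proposal temperature, a nucleus (top-p) cutoff, and importance-weight clipping each do at a single decoding step; the specific values and the way they are combined here are assumptions for illustration, not the paper's actual configuration.

```python
# Simplified sketch of the sampling "knobs" mentioned in the review: a proposal
# temperature, a nucleus (top-p) cutoff, and clipping of the importance weight
# p_model / q_proposal. Values and their combination are illustrative only.
import torch

def cold_nucleus_sample(logits, temperature=0.7, top_p=0.9, clip=5.0):
    # 1. A proposal temperature below 1 sharpens the distribution ("colder" sampling).
    probs_model = torch.softmax(logits, dim=-1)                  # model distribution
    probs_prop = torch.softmax(logits / temperature, dim=-1)     # proposal distribution

    # 2. Nucleus cutoff: keep the smallest set of tokens whose mass reaches top_p.
    sorted_p, sorted_idx = torch.sort(probs_prop, descending=True)
    keep = torch.cumsum(sorted_p, dim=-1) - sorted_p < top_p
    kept_idx = sorted_idx[keep]
    kept_p = probs_prop[kept_idx] / probs_prop[kept_idx].sum()

    # 3. Sample from the truncated proposal and compute a clipped importance
    #    weight p_model / q_proposal for a policy-gradient style update.
    pos = torch.multinomial(kept_p, 1)
    token = int(kept_idx[pos])
    weight = min(float(probs_model[token]) / float(kept_p[pos]), clip)
    return token, weight

logits = torch.randn(50)   # toy next-token logits over a 50-word vocabulary
print(cold_nucleus_sample(logits))
```

In a real setup these quantities would be computed at every decoding step over the generator's logits; a single random logit vector stands in for one step here.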


Review for NeurIPS paper: ColdGANs: Taming Language GANs with Cautious Sampling Strategies

Neural Information Processing Systems

This paper proposes a new method for stabilizing GANs for language generation. After the author rebuttal and reviewer discussion, the scores remained divergent: in the end, the paper received 2 reject and 2 accept recommendations. On one hand, the main criticism of this paper lies in the many hyper-parameters and tricks that need to be tuned. On the other hand, the reviewers appreciate the additional clarifications and experiments in the rebuttal, and think this paper provides a careful and insightful analysis of text GANs.


InitialGAN: A Language GAN with Completely Random Initialization

Da Ren, Qing Li

arXiv.org Artificial Intelligence

Text generative models trained via Maximum Likelihood Estimation (MLE) suffer from the notorious exposure bias problem, and Generative Adversarial Networks (GANs) have been shown to have the potential to tackle it. Existing language GANs adopt estimators like REINFORCE or continuous relaxations to model word probabilities. The inherent limitations of such estimators lead current models to rely on pre-training techniques (MLE pre-training or pre-trained embeddings). Representation modeling methods, which are free from those limitations, are however seldom explored because of their poor performance in previous attempts. Our analyses reveal that invalid sampling methods and unhealthy gradients are the main contributors to this unsatisfactory performance. In this work, we present two techniques to tackle these problems: dropout sampling and fully normalized LSTM. Based on these two techniques, we propose InitialGAN, whose parameters are fully randomly initialized. In addition, we introduce a new evaluation metric, Least Coverage Rate, to better evaluate the quality of generated samples. The experimental results demonstrate that InitialGAN outperforms both MLE and the other compared models. To the best of our knowledge, this is the first time a language GAN has outperformed MLE without using any pre-training techniques.
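
The two estimator families this abstract contrasts can be sketched side by side: REINFORCE scores a discrete sampled word with a reward, while a continuous relaxation such as Gumbel-Softmax lets gradients flow through a soft word vector. The toy logits and the stand-in discriminator weights below are assumptions for illustration only, not the paper's models.

```python
# Sketch of the two estimator families the abstract refers to: REINFORCE on a
# discrete sample vs. a Gumbel-Softmax continuous relaxation. The toy logits
# and the per-word "discriminator" weights are placeholders.
import torch
import torch.nn.functional as F

logits = torch.randn(20, requires_grad=True)   # toy next-word logits
disc_weights = torch.randn(20)                 # stand-in for a discriminator score per word

# --- REINFORCE: sample a discrete word, weight its log-probability by the reward ---
probs = F.softmax(logits, dim=-1)
word = torch.multinomial(probs, 1).item()
reward = disc_weights[word]                    # reward of the sampled (discrete) word
reinforce_loss = -reward.detach() * torch.log(probs[word])
reinforce_loss.backward()
print("REINFORCE gradient norm:", logits.grad.norm().item())

# --- Gumbel-Softmax: a soft, differentiable "word" lets gradients flow directly ---
logits.grad = None
soft_word = F.gumbel_softmax(logits, tau=0.5, hard=False)
relaxed_loss = -(soft_word * disc_weights).sum()
relaxed_loss.backward()
print("Gumbel-Softmax gradient norm:", logits.grad.norm().item())
```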